---
aliases: [Matrices, Matrix]
---
We shouldn't think of a matrix as just a list of scalars.
A matrix is symmetric if it is equal to its own transpose: A = Aᵀ.
When we reduce along axis 0, the rows will disappear.
```python
import numpy as np

A = np.arange(9).reshape(3, 3)
print(A, end='\n\n')

# Possible to do a reduction on the matrix (sum along axis 0)
A_c = A.sum(axis=0, keepdims=False)  # shape (3,) — the row axis is collapsed
print(A_c)
```
Output:
```
[[0 1 2]
 [3 4 5]
 [6 7 8]]

[ 9 12 15]
```
With `keepdims=False` the result has shape (3,), a flat vector, rather than 1x3: the row axis has been flattened away. The axis along which you do the operation is said to be reduced (flattened).
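The shape difference can be checked directly; `keepdims` is the NumPy parameter that controls whether the reduced axis is kept as a length-1 dimension:

```python
import numpy as np

A = np.arange(9).reshape(3, 3)
print(A.sum(axis=0).shape)                 # (3,)  — the reduced axis is gone
print(A.sum(axis=0, keepdims=True).shape)  # (1, 3) — the axis is kept with length 1
```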
How do we translate a point cloud to the origin?
We compute the mean of the point cloud and then subtract it from all the points.
The new mean of the point cloud is 0.
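A minimal sketch of this centering step (the point cloud here is made up for illustration):

```python
import numpy as np

# Hypothetical point cloud: 5 points in 2D, one point per row
points = np.array([[1.0, 2.0],
                   [3.0, 4.0],
                   [5.0, 6.0],
                   [2.0, 1.0],
                   [4.0, 2.0]])

centroid = points.mean(axis=0)   # reduce along axis 0 → shape (2,)
centered = points - centroid     # broadcasting subtracts it from every point

print(centered.mean(axis=0))     # the new mean is (numerically) zero
```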
We can also do multiplications element-wise: in Python, the `*` operator between NumPy matrices multiplies element by element (this is not matrix multiplication, which uses `@`).
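A quick comparison of the two operators (small made-up matrices):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[10, 20],
              [30, 40]])

print(A * B)   # element-wise (Hadamard) product: [[10 40], [90 160]]
print(A @ B)   # matrix product, for comparison:  [[70 100], [150 220]]
```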
They actually map a space into another space. Or an input vector to an output vector.
We are gonna take the basis vectors, and we are gonna change them. Then the operation can be done for every other point/vector.
It's basically just a change of basis vectors.
We are given the vector:
Now, the whole point of a linear transformation is to change the basis vectors, so we do:
The result is:
The linear combination of our scaled new basis vectors is then our transformed vector:
This transformation can be represented as a matrix:
Often this last part is collapsed into a single vector whose elements are yet to be calculated.
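As a concrete sketch (the matrix and vector here are hypothetical, not the ones from the example above): applying a matrix to a vector is the same as taking the linear combination of the matrix's columns, i.e. of the transformed basis vectors, scaled by the vector's components.

```python
import numpy as np

# Columns of M are where i-hat and j-hat land after the transformation
M = np.array([[1, -1],
              [1,  2]])
v = np.array([3, 2])

# M @ v equals 3 * (new i-hat) + 2 * (new j-hat)
by_columns = v[0] * M[:, 0] + v[1] * M[:, 1]
print(M @ v)        # [1 7]
print(by_columns)   # [1 7]
```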
A transformation can induce severe distortion when the two transformed basis vectors lie in the same direction: the plane collapses onto a line.
Now that we have established matrices as transformations, we can merge two matrices/transformations into a single matrix, called the composition of the two.
Basically, it reads from right to left, so in this case we first apply the rotation and then the shear.
When we apply rotation, we have that:
We now apply the shear to the rotated basis vectors (the new î and ĵ) to see where they land; those landing spots form our final composition.
Concretely, we multiply the shear by each column of the rotation (the new î and ĵ):
The result is the matrix:
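A sketch of such a composition, using a 90° counter-clockwise rotation and a horizontal shear as stand-in matrices:

```python
import numpy as np

rotation = np.array([[0, -1],
                     [1,  0]])   # 90° counter-clockwise rotation
shear = np.array([[1, 1],
                  [0, 1]])       # horizontal shear

# Reading right to left: first rotate, then shear
composition = shear @ rotation
print(composition)   # [[1 -1], [1 0]]

v = np.array([2.0, 1.0])
print(composition @ v)         # one matrix...
print(shear @ (rotation @ v))  # ...gives the same result as applying them in sequence
```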
The determinant of a matrix/transformation is the factor by which any area is scaled by the transformation.
The determinant also allows for negative values.
Negative values happen when the space is "flipped", for example when the unit vectors cross each other.
A determinant of 0 means that the dimensions have collapsed: for example, a volume has been squished into a plane, or an area has been squished into a single line.
This is what happens when the vectors/columns of the matrix are not linearly independent.
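The three cases above can be checked with `np.linalg.det` on small made-up matrices:

```python
import numpy as np

scale = np.array([[2.0, 0.0],
                  [0.0, 3.0]])    # every area is scaled by 2 * 3 = 6
flip = np.array([[0.0, 1.0],
                 [1.0, 0.0]])     # swaps i-hat and j-hat: space is flipped
collapse = np.array([[1.0, 2.0],
                     [2.0, 4.0]]) # columns are linearly dependent

print(np.linalg.det(scale))     # 6.0
print(np.linalg.det(flip))      # -1.0
print(np.linalg.det(collapse))  # ~0.0 — the plane is squished onto a line
```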
With rectangular matrices we can actually erase dimensions.
Example:
We can see that one dimension gets erased; here's the math:
Mathematically, you can see that it makes perfect sense.
We are basically just scaling the new basis vectors (the columns of the matrix) by the components of the vector.
We can also see that the component multiplying the third basis vector still influences the rest of the plane.
The third basis vector no longer has a dimension of its own (that dimension is flat), but its remaining components still contribute to the surviving dimensions.
It should be equivalent to:
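Whatever the original numbers were, the behavior can be sketched with a hypothetical 2×3 matrix mapping 3D vectors into the 2D plane:

```python
import numpy as np

# Hypothetical 2x3 matrix: maps 3D vectors into the 2D plane
M = np.array([[1, 0, 2],
              [0, 1, 1]])
v = np.array([1, 2, 3])

# The third component of v has no dimension of its own in the output,
# but it still contributes through M's third column
print(M @ v)   # [7 5]
```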
Here is a rectangular matrix that just kills one dimension and doesn't change or scale anything else:
Here the main components of the first two basis vectors are still 1, while the third column is all zeros, so the third dimension is simply dropped without scaling anything else.
It should be the equivalent of:
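A sketch of such a matrix, assuming the intended example is the projection that drops z and leaves x and y untouched:

```python
import numpy as np

# 2x3 projection onto the xy-plane: drops z, leaves x and y as they are
P = np.array([[1, 0, 0],
              [0, 1, 0]])
v = np.array([4, 5, 6])
print(P @ v)   # [4 5]
```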